Avoiding Pitfalls When Learning Recursive Theories
Authors
Abstract
Learning systems that express theories in first-order logic must ensure that the theories are executable and, in particular, that they do not lead to infinite recursion. This paper presents a heuristic method for preventing infinite recursion in the (multi-clause) definition of a recursive relation. The method has been implemented in the latest version of FOIL, but could also be used with any learning method that grows clauses from ground facts by repeated specialization. Results on several examples, including Ackermann's function, are presented.
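As a rough illustration only (a minimal sketch of one common termination check, not the heuristic described in the paper), a learner that specializes clauses can refuse any recursive literal whose arguments do not strictly decrease, under some well-founded ordering of the training constants, relative to the arguments of the clause head. The function name lex_smaller, the Python setting, and the toy numeric ordering below are assumptions made for illustration.

    def lex_smaller(rec_args, head_args, order):
        """Return True if rec_args is strictly lexicographically smaller than
        head_args under `order`, a dict mapping each constant to its rank.
        Accepting only such recursive literals keeps a learned clause from
        calling itself forever on the training constants."""
        for r, h in zip(rec_args, head_args):
            if order[r] < order[h]:
                return True
            if order[r] > order[h]:
                return False
        return False  # equal argument tuples would allow endless self-calls

    # Toy usage with natural numbers in their usual order, in the spirit of
    # clauses for Ackermann's function ack(M, N, V): the outer recursive call
    # shrinks M even though N grows, the inner call keeps M and shrinks N.
    order = {n: n for n in range(100)}
    print(lex_smaller((2, 7), (3, 4), order))  # True: first argument decreases
    print(lex_smaller((3, 3), (3, 4), order))  # True: first equal, second decreases
    print(lex_smaller((3, 4), (3, 4), order))  # False: no decrease, reject the literal

Lexicographic comparison is used here only because it gives a simple well-founded order over argument tuples; it is not claimed to be the ordering or pruning rule actually used by FOIL.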
Similar resources
Utility of EBL in Recursive Domain Theories
We investigate the utility of explanation-based learning in recursive domain theories and examine the cost of using macro-rules in these theories. The compilation options in a recursive domain theory range from constructing partial unwindings of the recursive rules to converting recursive rules into iterative ones. We compare these options against using appropriately ordered rules in the origin...
Can HOLL Outperform FOLL?
Learning first-order recursive theories remains a difficult learning task in a normal Inductive Logic Programming (ILP) setting, although numerous approaches have addressed it; using Higher-order Logic (HOL) avoids having to learn recursive clauses for such a task. It is one of the areas where Higher-order Logic Learning (HOLL), which uses the power of expressivity of HOL, can be expected to improv...
Avoiding common pitfalls when clustering biological data.
Clustering is an unsupervised learning method, which groups data points based on similarity, and is used to reveal the underlying structure of data. This computational approach is essential to understanding and visualizing the complex data that are acquired in high-throughput multidimensional biological experiments. Clustering enables researchers to make biological inferences for further experi...
Learning Recursive Theories in the Normal ILP Setting
Induction of recursive theories in the normal ILP setting is a difficult learning task whose complexity is equivalent to multiple predicate learning. In this paper we propose computational solutions to some relevant issues raised by the multiple predicate learning problem. A separate-and-parallel-conquer search strategy is adopted to interleave the learning of clauses supplying predicates with m...
Unsupervised Transduction Grammar Induction via Minimum Description Length
We present a minimalist, unsupervised learning model that induces relatively clean phrasal inversion transduction grammars by employing the minimum description length principle to drive search over a space defined by two opposing extreme types of ITGs. In comparison to most current SMT approaches, the model learns very parsimonious phrase translation lexicons that provide an obvious basis for...
Journal:
Volume / Issue:
Pages: -
Publication date: 1993